Computational RAM: Implementing Processors in Memory

Authors

  • Duncan G. Elliott
  • Robert McKenzie
Abstract

Computational RAM is a processor-in-memory architecture that makes highly effective use of internal memory bandwidth by pitch-matching simple processing elements to memory columns. Computational RAM (also referred to as C•RAM) can function either as a conventional memory chip or as a SIMD (single-instruction stream, multiple-data stream) computer. When used as a memory, computational RAM is competitive with conventional DRAM in terms of access time, packaging, and cost. As a SIMD computer, computational RAM can run suitable parallel applications thousands of times faster than a CPU. Computational RAM addresses many issues that prevented previous SIMD architectures from becoming commercially successful. While the SIMD programming model is somewhat restrictive, computational RAM has applications in a number of fields, including signal and image processing, computer graphics, databases, and CAD.
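To make the programming model concrete, here is a minimal, hypothetical C sketch of the SIMD style described above: one bit-serial processing element (PE) per memory column, with every PE executing the same instruction on each step. The memory layout, word width, PE count, and the bit-serial add routine are illustrative assumptions, not details from the paper.

```c
/*
 * Hypothetical sketch of SIMD-in-memory operation: one 1-bit PE per memory
 * column, all PEs executing the same instruction. Sizes are assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_PES   8      /* assumed number of memory columns / PEs        */
#define WORD_BITS 8      /* assumed operand width, stored bit-serially     */

/* memory[row][col]: each column belongs to one PE, each row holds one bit */
static uint8_t memory[3 * WORD_BITS][NUM_PES];

/* One broadcast SIMD step: every PE computes sum = a ^ b ^ carry.          */
static void simd_add_bit(int a_row, int b_row, int sum_row, uint8_t carry[]) {
    for (int pe = 0; pe < NUM_PES; pe++) {       /* all PEs, same instruction */
        uint8_t a = memory[a_row][pe];
        uint8_t b = memory[b_row][pe];
        memory[sum_row][pe] = a ^ b ^ carry[pe];
        carry[pe] = (a & b) | (a & carry[pe]) | (b & carry[pe]);
    }
}

int main(void) {
    /* Load per-PE operands A (rows 0..7) and B (rows 8..15), LSB first.    */
    for (int pe = 0; pe < NUM_PES; pe++) {
        uint8_t a = (uint8_t)(pe + 1), b = (uint8_t)(10 * pe);
        for (int bit = 0; bit < WORD_BITS; bit++) {
            memory[bit][pe]             = (a >> bit) & 1;
            memory[WORD_BITS + bit][pe] = (b >> bit) & 1;
        }
    }

    /* Bit-serial add: WORD_BITS SIMD steps produce A+B in rows 16..23.     */
    uint8_t carry[NUM_PES] = {0};
    for (int bit = 0; bit < WORD_BITS; bit++)
        simd_add_bit(bit, WORD_BITS + bit, 2 * WORD_BITS + bit, carry);

    /* Read back each PE's result.                                          */
    for (int pe = 0; pe < NUM_PES; pe++) {
        unsigned sum = 0;
        for (int bit = 0; bit < WORD_BITS; bit++)
            sum |= (unsigned)memory[2 * WORD_BITS + bit][pe] << bit;
        printf("PE %d: %u\n", pe, sum);
    }
    return 0;
}
```

Because every column operates in lockstep, an entire memory row of operands is processed per step, which is the source of the speedups claimed for data-parallel workloads.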


Similar articles

Parallel Methods for Scaling Data Mining Algorithms to Large Data Sets

Section C5.10.2 introduces computational models for working with large data sets. By large we mean data that does not fit into the memory of a single processor. Parallel RAM computational models describe algorithms which are distributed between several processors. Hierarchical memory computational models describe algorithms that require working with data both in memory and on disk. Parallel dis...

Computational RAM : The case for SIMD computing in memory

DRAMs contain a very wide datapath internal to the chip which can deliver orders of magnitude greater memory bandwidth inside the chip than through the pins. The dilemma lies in deciding what to connect to the other end of this wide pipe to memory. We compare two alternatives: microprocessors (including MIMD multiprocessors) and SIMD processors. Microprocessors are better accepted and are suite...
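The bandwidth gap can be illustrated with rough arithmetic; the C snippet below uses assumed device parameters (row width, cycle times, pin count), not figures from this abstract, to show how the internal row datapath can exceed pin bandwidth by roughly two orders of magnitude.

```c
/* Back-of-the-envelope bandwidth comparison; all parameters are assumptions. */
#include <stdio.h>

int main(void) {
    double row_bits     = 8192.0;  /* assumed sense-amp row width (bits)   */
    double row_cycle_ns = 50.0;    /* assumed internal row-access time     */
    double pin_bits     = 16.0;    /* assumed external data-bus width      */
    double pin_cycle_ns = 10.0;    /* assumed external transfer time       */

    double internal_gbps = row_bits / row_cycle_ns;  /* bits per ns = Gbit/s */
    double external_gbps = pin_bits / pin_cycle_ns;

    printf("internal: %.1f Gbit/s, external: %.1f Gbit/s, ratio: %.0fx\n",
           internal_gbps, external_gbps, internal_gbps / external_gbps);
    return 0;
}
```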

Optimizing a CFD Fortran code for GRID Computing

Computations on clusters and computational GRIDS encounter similar situations where the processors used have different speeds and local RAM. In order to have efficient computations with processors of different speeds and local RAM, load balancing is necessary. That is, faster processors are given more work or larger domains to compute than the slower processors so that all processors finish the...
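As a rough illustration of the load-balancing idea described above, the following C sketch assigns each processor a share of the grid proportional to an assumed relative speed; the speeds, cell count, and helper function are hypothetical and not taken from the paper.

```c
/* Speed-proportional work allocation sketch; example values are made up. */
#include <stdio.h>

/* Distribute total_cells across nprocs in proportion to speed[]. */
static void balance(int total_cells, int nprocs,
                    const double speed[], int cells[]) {
    double total_speed = 0.0;
    for (int p = 0; p < nprocs; p++)
        total_speed += speed[p];

    int assigned = 0;
    for (int p = 0; p < nprocs; p++) {
        cells[p] = (int)(total_cells * speed[p] / total_speed);
        assigned += cells[p];
    }
    cells[nprocs - 1] += total_cells - assigned;  /* rounding remainder to last proc */
}

int main(void) {
    const double speed[] = {1.0, 1.0, 2.5, 4.0};  /* assumed relative speeds */
    int cells[4];

    balance(100000, 4, speed, cells);
    for (int p = 0; p < 4; p++)
        printf("proc %d: %d cells\n", p, cells[p]);
    return 0;
}
```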

Reducing Computational and Memory Cost of Substructuring Technique in Finite Element Models

Substructuring in the finite element method is a technique that reduces computational cost and memory usage for analysis of complex structures. The efficiency of this technique depends on the number of substructures in different problems. Some subdivisions increase computational cost, but require little memory usage and vice versa. In the present study, the cost functions of computations and me...

A Message-Passing Distributed Memory Parallel Algorithm for a Dual-Code Thin Layer, Parabolized Navier-Stokes Solver

In this study, the results of parallelization of a 3-D dual code (Thin Layer, Parabolized Navier-Stokes solver) for solving supersonic turbulent flow around body and wing-body combinations are presented. As a serial code, TLNS solver is very time consuming and takes a large part of memory due to the iterative and lengthy computations. Also for complicated geometries, an exceeding number of grid...


Journal title:

Volume   Issue

Pages  -

Publication date: 1999